This paper introduces an enhanced meta-heuristic (ML-ACO) that combines machine learning (ML) with ant colony optimization (ACO) to solve combinatorial optimization problems. To illustrate the underlying mechanism of our ML-ACO algorithm, we first describe the test problem, the Orienteering Problem, in which the goal is to find a route that visits a subset of vertices in a graph within a time budget, so as to maximize the collected score. In the first phase of our ML-ACO algorithm, an ML model is trained on a set of small problem instances for which the optimal solution is known. Specifically, a classification model learns to label an edge as belonging to the optimal route or not, using problem-specific features and statistical measures. The trained model is then used to predict, for each edge in the graph of a test problem instance, the probability that it belongs to the corresponding optimal route. In the second phase, we incorporate the predicted probabilities into the ACO component of our algorithm, i.e., we use the probability values as heuristic weights or to warm-start the pheromone matrix. Here, the probability values bias route construction towards the predicted high-quality edges when building feasible routes. We have tested multiple classification models, including graph neural networks, logistic regression, and support vector machines, and the experimental results show that our solution-prediction approach consistently boosts the performance of ACO. Furthermore, we show empirically that our ML models trained on small synthetic instances generalize to large synthetic and real-world instances. Our approach of integrating ML with a meta-heuristic is generic and can be applied to a wide range of optimization problems.
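As an illustration of the second phase, the sketch below shows one plausible way the predicted edge probabilities could enter ACO's edge-selection rule, both as heuristic weights and as a warm start for the pheromone matrix. The toy instance, the exponent names `alpha` and `beta`, and the omission of the time-budget feasibility check are simplifications, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                    # toy instance: 6 vertices
edge_probs = rng.random((n, n))          # ML-predicted P(edge belongs to the optimal route)

alpha, beta = 1.0, 2.0                   # standard ACO exponents (illustrative values)
pheromone = edge_probs.copy()            # option 1: warm-start the pheromone matrix
heuristic = edge_probs                   # option 2: use predictions as heuristic weights

def next_vertex(current, unvisited):
    """Sample the next vertex, biased towards predicted high-quality edges."""
    w = pheromone[current, unvisited] ** alpha * heuristic[current, unvisited] ** beta
    return int(rng.choice(unvisited, p=w / w.sum()))

route, unvisited = [0], list(range(1, n))
while unvisited:                         # the real algorithm would also check the time budget
    v = next_vertex(route[-1], unvisited)
    route.append(v)
    unvisited.remove(v)
print(route)
```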
Mathematical models of cognition are often memoryless and ignore potential fluctuations of their parameters. However, human cognition is inherently dynamic, regardless of the reference time scale. Thus, we propose to augment mechanistic cognitive models with a temporal dimension and estimate the resulting dynamics from a superstatistics perspective. In its simplest form, such a model entails a hierarchy between a low-level observation model and a high-level transition model. The observation model describes the local behavior of a system, and the transition model specifies how the parameters of the observation model evolve over time. To overcome the estimation challenges resulting from the complexity of superstatistical models, we develop and validate a simulation-based deep learning method for Bayesian inference, which can recover both time-varying and time-invariant parameters. We first benchmark our method against two existing frameworks capable of estimating time-varying parameters. We then apply our method to fit a dynamic version of the diffusion decision model to long time series of human response time data. Our results show that the deep learning approach is very efficient in capturing the temporal dynamics of the model. Furthermore, we show that the erroneous assumption of static or homogeneous parameters will hide important temporal information.
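To make the two-level structure concrete, here is a minimal simulation sketch: a Gaussian random walk (the high-level transition model) drives the drift rate of a Wiener diffusion process (the low-level observation model), whose first-passage times play the role of response times. The parameter values and names (`a`, `sigma_v`) are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, dt, a, sigma_v = 200, 1e-3, 1.0, 0.05   # illustrative values

def wiener_rt(v):
    """Low-level observation model: first-passage time of a Wiener process."""
    x, t = a / 2, 0.0                   # start midway between the boundaries 0 and a
    while 0.0 < x < a:
        x += v * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, int(x >= a)               # (response time, choice)

drift = np.empty(n_trials)
drift[0] = 1.0
for t in range(1, n_trials):            # high-level transition model: Gaussian random walk
    drift[t] = drift[t - 1] + sigma_v * rng.standard_normal()

data = [wiener_rt(v) for v in drift]    # one (response time, choice) pair per trial
```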
Language models demonstrate both quantitative improvements and new qualitative capabilities as they scale up. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Optimal sampling-based motion planning and trajectory optimization are two competing frameworks for generating optimal motion plans. The two frameworks have complementary properties: sampling-based planners are typically slow to converge, but provide optimality guarantees, whereas trajectory optimizers are typically fast to converge, but do not provide global optimality guarantees in non-convex problems, e.g. scenarios with obstacles. To achieve the best of both worlds, we introduce a new planner, BITKOMO, which integrates the asymptotically optimal Batch Informed Trees (BIT*) planner with the K-Order Markov Optimization (KOMO) trajectory optimization framework. Our planner is anytime and maintains the same asymptotic optimality guarantees provided by BIT*, while also exploiting the fast convergence of the KOMO trajectory optimizer. We experimentally evaluate our planner on manipulation scenarios that involve high-dimensional configuration spaces, with up to two 7-DoF manipulators, obstacles, and narrow passages. BITKOMO performs better than KOMO even when KOMO fails, and it outperforms BIT* in terms of convergence to an optimal solution.
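The anytime integration can be pictured as a simple loop: the sampling-based planner proposes feasible paths, each proposal is locally refined by the trajectory optimizer, and the best feasible path seen so far is retained. The sketch below, with the planner and optimizer abstracted as callables, is a hedged reading of that loop, not the BITKOMO code.

```python
import math, time

def anytime_bitkomo(propose_path, optimize_path, cost, feasible, budget_s=5.0):
    """Anytime loop: keep the best feasible path found within the time budget.

    propose_path(): next candidate path from the sampling-based planner, or None.
    optimize_path(path): locally refined path from the trajectory optimizer, or None.
    """
    best_path, best_cost = None, math.inf
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        path = propose_path()                           # e.g. a feasible path from BIT*
        if path is None:
            continue
        for candidate in (path, optimize_path(path)):   # keep both; refinement may fail
            if candidate is None or not feasible(candidate):
                continue
            c = cost(candidate)
            if c < best_cost:                           # cost only ever improves: anytime behavior
                best_path, best_cost = candidate, c
    return best_path, best_cost
```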
CT and MRI are two widely used clinical imaging modalities for non-invasive diagnosis. However, both modalities come with certain problems: CT uses harmful ionizing radiation, and MRI suffers from slow acquisition speed. Both problems can be addressed by undersampling, such as sparse sampling. However, such undersampled data leads to lower resolution and introduces artifacts. Several techniques, including deep-learning-based methods, have been proposed to reconstruct such data. However, the undersampled reconstruction problems of the two modalities have always been considered as two distinct problems and tackled separately by different research works. This paper proposes a unified solution for both sparse CT and undersampled radial MRI reconstruction: Fourier-transform-based preprocessing is applied to the radial MRI, and both modalities are then reconstructed using sinogram upsampling combined with filtered back-projection. The Primal-Dual network is a deep-learning-based method for reconstructing sparsely sampled CT data. This paper introduces Primal-Dual UNet, which improves on the Primal-Dual network in terms of accuracy and reconstruction speed. The proposed method achieves an average SSIM of 0.932 for sparse CT reconstruction with fan-beam geometry at a sparsity level of 16, a statistically significant improvement over the previous model, which achieved 0.919. Furthermore, the proposed model achieves average SSIMs of 0.903 and 0.957 when reconstructing undersampled brain and abdominal MRI data with an acceleration factor of 16, statistically significant improvements over the original model, which achieved 0.867 and 0.949. Finally, this paper shows that the proposed network not only improves overall image quality, but also improves image quality in the region of interest, and generalizes better in the presence of a needle.
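For orientation, here is a classical stand-in for the pipeline described above: a sparsely sampled sinogram is upsampled along the angular axis (plain linear interpolation replacing the learned Primal-Dual UNet) and then reconstructed with filtered back-projection. Parallel-beam geometry via scikit-image is used for simplicity, whereas the paper's CT experiments use fan-beam geometry.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = shepp_logan_phantom()
n_full = 180
sparse_angles = np.linspace(0.0, 180.0, n_full // 16, endpoint=False)   # sparsity level 16
full_angles = np.linspace(0.0, 180.0, n_full, endpoint=False)

sino_sparse = radon(image, theta=sparse_angles)          # simulated sparse acquisition
sino_up = resize(sino_sparse, (sino_sparse.shape[0], n_full),
                 order=1, mode='edge')                   # naive angular upsampling
recon = iradon(sino_up, theta=full_angles, filter_name='ramp')   # filtered back-projection
```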
Column generation (CG) is an effective method for solving large-scale optimization problems. CG starts by solving a subproblem with a subset of the columns (i.e., variables) and gradually includes new columns that can improve the solution of the current subproblem. New columns are generated as needed by repeatedly solving a pricing problem, which is often NP-hard and is the bottleneck of the CG approach. To tackle this, we propose a machine-learning-based pricing heuristic (MLPH) that can generate many high-quality columns efficiently. In each iteration of CG, our MLPH leverages an ML model to predict the optimal solution of the pricing problem, and this prediction is then used to guide a sampling method that efficiently generates multiple high-quality columns. Using the graph coloring problem, we empirically show that MLPH significantly enhances CG compared to six state-of-the-art methods, and that the improvement in CG can lead to substantially better performance of the branch-and-price exact method.
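As a concrete reading of the sampling step, the sketch below biases the greedy construction of columns (independent sets, as in the graph coloring pricing problem) by ML-predicted per-vertex probabilities. The tiny graph and the probability vector are toy placeholders, not the paper's model outputs.

```python
import numpy as np

rng = np.random.default_rng(2)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}   # toy conflict graph
probs = np.array([0.9, 0.1, 0.2, 0.8, 0.7])   # ML prediction: P(vertex in optimal column)

def sample_column():
    """Greedily sample an independent set, biased by the predicted probabilities."""
    column, candidates = set(), set(adj)
    while candidates:
        cand = sorted(candidates)
        w = probs[cand] / probs[cand].sum()
        v = int(rng.choice(cand, p=w))
        column.add(v)
        candidates -= adj[v] | {v}            # keep the set independent
    return frozenset(column)

columns = {sample_column() for _ in range(100)}   # many candidate columns per CG iteration
print(columns)
```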
Neural Radiance Fields (NeRF) have emerged as a powerful representation for the task of novel view synthesis due to their simplicity and state-of-the-art performance. Although NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, its performance drops significantly when this number is reduced. We observe that the majority of artifacts in sparse-input scenarios are caused by errors in the estimated scene geometry and by divergent behavior at the start of training. We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints, and by annealing the ray sampling space during training. We additionally use a normalizing flow model to regularize the color of unobserved viewpoints. Our model outperforms not only other methods that optimize over a single scene, but in many cases also conditional models that have been extensively pre-trained on large multi-view datasets.
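One common form of the geometry regularization described above is a depth smoothness penalty on small patches rendered from unobserved viewpoints; the sketch below shows that penalty in its simplest form. This is an assumption consistent with the text, not necessarily the paper's exact loss.

```python
import numpy as np

def depth_smoothness_loss(depth_patch):
    """Penalize depth differences between neighboring pixels of a rendered patch."""
    dx = depth_patch[:, 1:] - depth_patch[:, :-1]   # horizontal neighbors
    dy = depth_patch[1:, :] - depth_patch[:-1, :]   # vertical neighbors
    return float((dx ** 2).sum() + (dy ** 2).sum())

patch = np.random.default_rng(3).random((8, 8))     # stand-in for a rendered depth patch
print(depth_smoothness_loss(patch))
```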
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing human supervisors to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
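Muse's parallel decoding builds on MaskGIT-style iterative unmasking: at each step, all masked tokens are predicted at once and only the most confident predictions are kept. The sketch below illustrates that idea with a linear schedule and a dummy predictor standing in for the transformer; the real model's schedule and predictor differ.

```python
import numpy as np

rng = np.random.default_rng(4)
vocab, n_tokens, n_steps = 16, 64, 8
MASK = -1

def predict(tokens):
    """Dummy stand-in for the transformer: per-position distributions over the vocab."""
    logits = rng.random((len(tokens), vocab))
    return logits / logits.sum(axis=1, keepdims=True)

tokens = np.full(n_tokens, MASK)
for step in range(n_steps):
    probs = predict(tokens)                      # predict every position in parallel
    ids, conf = probs.argmax(axis=1), probs.max(axis=1)
    masked = tokens == MASK
    target = int(np.ceil(n_tokens * (step + 1) / n_steps))  # total decoded after this step
    n_new = target - int((~masked).sum())
    conf = np.where(masked, conf, -np.inf)       # only still-masked positions compete
    chosen = np.argsort(-conf)[:n_new]           # unmask the most confident predictions
    tokens[chosen] = ids[chosen]
assert (tokens != MASK).all()
```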
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods to better aid clinicians in their decision-making and so improve patient outcomes, by reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
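The degradation step described above can be sketched directly with NumPy: multiplying an A-scan's spectrum by a Gaussian window narrows the spectral bandwidth, which broadens the axial point spread function and thus lowers axial resolution. The signal and window width below are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
a_scan = rng.random(512)                         # toy stand-in for one A-scan depth profile

spectrum = np.fft.fft(a_scan)
freqs = np.fft.fftfreq(a_scan.size)
sigma = 0.05                                     # smaller sigma -> narrower spectral bandwidth
window = np.exp(-0.5 * (freqs / sigma) ** 2)     # Gaussian window in the spectral domain
low_res = np.fft.ifft(spectrum * window).real    # reduced axial resolution A-scan
```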